20 research outputs found

    Privacy-Preserving Distributed Processing Over Networks


    Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework

    As the modern world becomes increasingly digitized and interconnected, distributed signal processing has proven effective at handling the large volumes of data it generates. However, a main challenge limiting the broad use of distributed signal processing techniques is the issue of privacy when handling sensitive data. To address this privacy issue, we propose a novel yet general subspace perturbation method for privacy-preserving distributed optimization, which allows each node to obtain the desired solution while protecting its private data. In particular, we show that the dual variables introduced by each distributed optimizer do not converge in a certain subspace determined by the graph topology. Additionally, the optimization variable is guaranteed to converge to the desired solution, because it is orthogonal to this non-convergent subspace. We therefore propose to insert noise in the non-convergent subspace through the dual variable such that the private data are protected while the accuracy of the desired solution is completely unaffected. Moreover, the proposed method is shown to be secure under two widely used adversary models: passive and eavesdropping. Furthermore, we consider several distributed optimizers, such as ADMM and PDMM, to demonstrate the general applicability of the proposed method. Finally, we test the performance through a set of applications. Numerical tests indicate that the proposed method is superior to existing methods in terms of several parameters, including estimation accuracy, privacy level, communication cost and convergence rate.
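A minimal, hypothetical sketch of the subspace-perturbation idea (not the paper's actual optimizer): if the primal update depends only on the projection of the dual variable onto a "convergent" subspace, then arbitrarily large noise in the orthogonal complement changes what a node transmits without changing the solution. All names and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Orthonormal basis of R^n; the first 3 columns span the (hypothetical)
# "convergent" subspace, the remaining columns its orthogonal complement.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
conv, nonconv = Q[:, :3], Q[:, 3:]
P = conv @ conv.T                    # projector onto the convergent subspace

z = rng.standard_normal(n)                         # dual variable
noise = 10.0 * (nonconv @ rng.standard_normal(3))  # perturbation lying in the
z_priv = z + noise                                 # non-convergent subspace

# The (hypothetical) primal update sees only P @ z, so the perturbation
# cancels exactly ...
assert np.allclose(P @ z, P @ z_priv)
# ... while the values a node would actually transmit differ substantially.
assert np.linalg.norm(z - z_priv) > 1.0
```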

    A Privacy-Preserving Asynchronous Averaging Algorithm based on Shamir’s Secret Sharing



    Communication efficient privacy-preserving distributed optimization using adaptive differential quantization

    Privacy issues and communication cost are both major concerns in distributed optimization. There is often a trade-off between them, because the encryption methods required for privacy-preservation often incur expensive communication overhead. To address this issue, in this paper we propose a quantization-based approach that achieves solutions that are both communication-efficient and privacy-preserving in the context of distributed optimization. By deploying an adaptive differential quantization scheme, we allow each node in the network to achieve its optimum solution with a low communication cost while keeping its private data unrevealed. Additionally, the proposed approach is general and can be applied to various distributed optimization methods, such as the primal-dual method of multipliers (PDMM) and the alternating direction method of multipliers (ADMM). Moreover, we consider two widely used adversary models: passive and eavesdropping. Finally, we investigate the properties of the proposed approach using different applications and demonstrate its superior performance in terms of several parameters, including accuracy, privacy, and communication cost.
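A toy sketch of differential quantization with an adaptive, geometrically shrinking stepsize; the parameters `delta0` and `gamma` are illustrative assumptions, not the paper's scheme. Sender and receiver apply the same update rule, so only a small integer needs to be transmitted per iteration.

```python
import numpy as np

def adaptive_diff_quantize(iterates, delta0=1.0, gamma=0.8):
    """Transmit quantized *differences* with stepsize delta0 * gamma**k.
    delta0 and gamma are illustrative parameters, not the paper's."""
    recon = 0.0          # receiver's reconstruction (sender mirrors it)
    out = []
    for k, v in enumerate(iterates):
        delta = delta0 * gamma ** k
        q = int(np.round((v - recon) / delta))  # small integer on the wire
        recon += q * delta                      # both sides update identically
        out.append(recon)
    return out

# As the optimizer's iterates converge, consecutive differences shrink,
# so a shrinking stepsize keeps the error bounded with few bits per message.
iterates = [1.0 / (k + 1) for k in range(20)]
recon = adaptive_diff_quantize(iterates)
assert abs(recon[-1] - iterates[-1]) <= 0.5 * 0.8 ** 19  # within delta/2
```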

    Privacy-Preserving Distributed Expectation Maximization for Gaussian Mixture Model using Subspace Perturbation

    Privacy has become a major concern in machine learning. In fact, federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than the private data itself. However, federated learning does not always guarantee privacy-preservation, as the intermediate updates may also reveal sensitive information. In this paper, we give an explicit information-theoretic analysis of a federated expectation maximization algorithm for the Gaussian mixture model and prove that the intermediate updates can cause severe privacy leakage. To address the privacy issue, we propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each maximization step. Additionally, we consider two different types of security attacks: the honest-but-curious and eavesdropping adversary models. Numerical validation shows that the proposed approach has superior performance compared to the existing approach in terms of both accuracy and privacy level.
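The secure M-step computation can be illustrated, very loosely, by an additive-masking sketch: each node perturbs its local sufficient statistic with a mask, and the masks are constructed to cancel in the network-wide sum, so only the aggregate needed by the maximization step is revealed. This is a simplified stand-in for the paper's solution; all variables below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four nodes, each holding a private 2-dim local sufficient statistic
# (think: a responsibility-weighted sum used in the GMM M-step).
local_stats = rng.standard_normal((4, 2))

# Zero-sum masks: a crude stand-in for pairwise masking / secret sharing.
masks = rng.standard_normal((4, 2))
masks -= masks.mean(axis=0)          # each column of masks now sums to zero

shared = local_stats + masks         # what each node actually transmits
aggregate = shared.sum(axis=0)       # masks cancel: equals the true sum

assert np.allclose(aggregate, local_stats.sum(axis=0))
assert not np.allclose(shared, local_stats)   # transmitted values are masked
```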

    Convex optimization-based Privacy-Preserving Distributed Least Squares via Subspace Perturbation

    Over the past decades, privacy-preservation has received considerable attention, not only as a consequence of regulations such as the General Data Protection Regulation in the EU, but also because people are increasingly concerned about data abuse as the world becomes more digitized. In this paper we propose a convex optimization-based subspace perturbation approach to solve privacy-preserving distributed least squares problems. Based on the primal-dual method of multipliers, the introduced dual variables converge only in a subspace determined by the graph topology and do not converge in its orthogonal complement. We therefore propose to exploit this property for privacy-preservation by using the non-converging part of the dual variables to perturb the private data, thereby protecting it from being revealed. Moreover, we prove that the proposed approach is secure under both eavesdropping and passive adversaries. Computer simulations are conducted to demonstrate the benefits of the proposed approach through its convergence properties and accuracy.
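For context, a minimal (non-private) sketch of the distributed least-squares setup: the global solution depends on the nodes' data only through the network-wide sums of A_i^T A_i and A_i^T b_i, which is the kind of aggregate a privacy-preserving scheme would compute without revealing any individual (A_i, b_i). Everything below is illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each of 4 nodes holds a private block (A_i, b_i) of the least-squares problem.
blocks = [(rng.standard_normal((5, 3)), rng.standard_normal(5)) for _ in range(4)]

# The global solution needs only these aggregates, not the raw blocks.
AtA = sum(A.T @ A for A, _ in blocks)
Atb = sum(A.T @ b for A, b in blocks)
x_dist = np.linalg.solve(AtA, Atb)

# Centralized reference: stack all blocks and solve once.
A_all = np.vstack([A for A, _ in blocks])
b_all = np.concatenate([b for _, b in blocks])
x_cent, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)

assert np.allclose(x_dist, x_cent)   # aggregates suffice for the exact solution
```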

    Two for the price of one: communication efficient and privacy-preserving distributed average consensus using quantization

    Both communication overhead and privacy are major concerns in designing distributed computing algorithms. It is very challenging to address them simultaneously, as the encryption methods required for privacy-preservation often incur high communication costs. In this paper, we argue that there is a fundamental link between communication efficiency and privacy-preservation through quantization. Based on the observation that quantization, while saving communication bandwidth, introduces error into the system, we propose a novel privacy-preserving distributed average consensus algorithm that uses the error introduced by quantization as noise to obfuscate the private data, thereby protecting it from being revealed to others. Similar to existing differential privacy-based approaches, the proposed approach is robust and has low computational complexity in dealing with two widely considered adversary models: the passive and eavesdropping adversaries. In addition, the method is generally applicable to many distributed optimizers, such as PDMM and (generalized) ADMM. We conduct numerical simulations to validate that the proposed approach has superior performance compared to existing algorithms in terms of accuracy, communication bandwidth and privacy.
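A toy sketch of the quantization-as-obfuscation idea on an idealized complete graph (not the paper's algorithm; the mixing matrix and stepsize schedule are assumptions): each node transmits only a coarsely quantized state, so the exact private value is never sent, while a shrinking quantizer stepsize keeps the consensus error bounded.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(x, delta):
    """Uniform quantizer with stepsize delta (illustrative)."""
    return delta * np.round(x / delta)

x = rng.standard_normal(8)       # private initial values, one per node
target = x.mean()
W = np.full((8, 8), 1 / 8)       # idealized mixing matrix (complete graph)

for k in range(30):
    delta = 0.1 * 0.5 ** k       # shrinking stepsize: coarse early, fine late
    # Nodes broadcast quantized states only; the quantization error
    # obfuscates the private values while consensus still forms.
    x = W @ quantize(x, delta)

assert np.ptp(x) < 1e-9              # all nodes agree
assert abs(x[0] - target) < 0.12     # within the accumulated-error bound
```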